The NER model is one of a number of methods for determining the accuracy of live subtitles in television broadcasts and events that are produced using speech recognition. The three letters stand for Number, Edit error and Recognition error. It is an alternative to the WER (word error rate) model used in several countries.

The model provides a formula for determining the quality of live subtitles: a NER value of 100 indicates that the content was subtitled entirely correctly. The overall score is calculated as follows: first, the number of edit and recognition errors is deducted from the total number of words in the live subtitles; this figure is then divided by the total number of words in the live subtitles and finally multiplied by one hundred:

:<math>\text{NER} = \frac{N - E - R}{N} \times 100</math>

The letters stand for the following:
* ''N'' (Number) = total number of words in the live subtitles
* ''E'' (Edit error) = number of edit errors
* ''R'' (Recognition error) = number of recognition errors

This measurement process is already used for public television broadcasts in several European countries, such as Italy and Switzerland. Other countries and authorities, such as the British regulator Ofcom, have expressed an interest. By contrast, the WER model is static, since it simply measures the textual discrepancy between what was written and what was spoken.

== See also ==
* Word error rate
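The calculation of the formula above can be illustrated with a minimal sketch; the function name and the example figures are purely illustrative and not part of the NER specification:

<syntaxhighlight lang="python">
def ner_score(n_words: int, edit_errors: float, recognition_errors: float) -> float:
    """Subtract edit and recognition errors from the total word count,
    divide by the total word count, and multiply by one hundred."""
    if n_words <= 0:
        raise ValueError("n_words must be positive")
    return (n_words - edit_errors - recognition_errors) / n_words * 100

# Illustrative values only: 500 subtitle words with 4 edit errors
# and 6 recognition errors yield a NER value of 98.0.
print(ner_score(500, 4, 6))  # 98.0
</syntaxhighlight>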